
    TransForm: Formally Specifying Transistency Models and Synthesizing Enhanced Litmus Tests

    Memory consistency models (MCMs) specify the legal ordering and visibility of shared memory accesses in a parallel program. Traditionally, instruction set architecture (ISA) MCMs assume that relevant program-visible memory ordering behaviors only result from shared memory interactions that take place between user-level program instructions. This assumption fails to account for virtual memory (VM) implementations that may result in additional shared memory interactions between user-level program instructions and both 1) system-level operations (e.g., address remappings and translation lookaside buffer invalidations initiated by system calls) and 2) hardware-level operations (e.g., hardware page table walks and dirty bit updates) during a user-level program's execution. These additional shared memory interactions can impact the observable memory ordering behaviors of user-level programs. Thus, memory transistency models (MTMs) have been coined as a superset of MCMs to additionally articulate VM-aware consistency rules. However, no prior work has enabled formal MTM specifications, nor methods to support their automated analysis. To fill the above gap, this paper presents the TransForm framework. First, TransForm features an axiomatic vocabulary for formally specifying MTMs. Second, TransForm includes a synthesis engine to support the automated generation of litmus tests enhanced with MTM features (i.e., enhanced litmus tests, or ELTs) when supplied with a TransForm MTM specification. As a case study, we formally define an estimated MTM for Intel x86 processors, called x86t_elt, that is based on observations made by an ELT-based evaluation of an Intel x86 MTM implementation from prior work and available public documentation. 
    Given x86t_elt and a synthesis bound as input, TransForm's synthesis engine successfully produces a set of ELTs, including relevant ELTs from prior work.
    Comment: This is an updated version of the TransForm paper, featuring updated results that reflect performance optimizations and software bug fixes. 14 pages, 11 figures. Proceedings of the 47th Annual International Symposium on Computer Architecture (ISCA), 2020.
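    A minimal sketch of the axiomatic style described above (the event and relation names here are illustrative, not TransForm's actual vocabulary): an execution of an enhanced litmus test is modeled as a graph over both user-level and system/hardware-level events, and an outcome is forbidden when the union of the model's ordering relations contains a cycle.

```python
from collections import defaultdict

def has_cycle(n, edges):
    """DFS-based cycle detection over nodes 0..n-1."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    state = [0] * n          # 0 = unvisited, 1 = on stack, 2 = done
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(state[u] == 0 and dfs(u) for u in range(n))

# Hypothetical events: 0 = store remapping a page-table entry, 1 = TLB
# invalidation, 2 = hardware page-table walk, 3 = user load via the mapping.
po    = [(0, 1)]             # program order on the remapping core
rf_fr = [(2, 3), (3, 0)]     # reads-from / from-reads style orderings
trans = [(1, 2)]             # invalidation ordered before the later walk
forbidden = has_cycle(4, po + rf_fr + trans)   # cycle 0 -> 1 -> 2 -> 3 -> 0
```

    The point of the sketch is only the shape of the check: system- and hardware-level events participate in the same ordering relations as user-level accesses, so the same acyclicity-style axioms can rule executions in or out.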

    Serberus: Protecting Cryptographic Code from Spectres at Compile-Time

    We present Serberus, the first comprehensive mitigation for hardening constant-time (CT) code against Spectre attacks (involving the PHT, BTB, RSB, STL, and/or PSF speculation primitives) on existing hardware. Serberus is based on three insights. First, some hardware control-flow integrity (CFI) protections restrict transient control-flow to the extent that it may be comprehensively considered by software analyses. Second, conformance to the accepted CT code discipline permits two code patterns that are unsafe in the post-Spectre era. Third, once these code patterns are addressed, all Spectre leakage of secrets in CT programs can be attributed to one of four classes of taint primitives: instructions that can transiently assign a secret value to a publicly-typed register. We evaluate Serberus on cryptographic primitives in the OpenSSL, Libsodium, and HACL* libraries. Serberus introduces 21.3% runtime overhead on average, compared to 24.9% for the next closest state-of-the-art software mitigation, which is less secure.
    Comment: Authors' version; to appear in the Proceedings of the IEEE Symposium on Security and Privacy (S&P), 2024.
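    The notion of a taint primitive can be illustrated with a toy forward taint pass; this is a sketch of the idea only, not Serberus's actual analysis, and the instruction encoding and register names are invented. An instruction is flagged when a secret-derived value flows into a register whose declared type is public.

```python
PUBLIC, SECRET = "public", "secret"

def find_taint_primitives(insns, reg_types):
    """insns: list of (dst, srcs) over straight-line three-address code;
    reg_types: the declared (policy-level) type of each register."""
    taint = dict(reg_types)               # start from declared types
    flagged = []
    for i, (dst, srcs) in enumerate(insns):
        t = SECRET if any(taint.get(s) == SECRET for s in srcs) else PUBLIC
        if t == SECRET and reg_types.get(dst) == PUBLIC:
            flagged.append(i)             # secret flows into a public register
        taint[dst] = t
    return flagged

types = {"key": SECRET, "idx": PUBLIC, "tmp": PUBLIC}
prog = [("tmp", ["key"]),     # 0: copies a secret into publicly-typed tmp
        ("idx", ["idx"])]     # 1: public-only dataflow, not flagged
```

    A real analysis must additionally reason about transient control flow, which is exactly what the paper's CFI insight makes tractable.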

    Axiomatic hardware-software contracts for security

    We propose leakage containment models (LCMs)—novel axiomatic security contracts which support formally reasoning about the security guarantees of programs when they run on particular microarchitectures. Our core contribution is an axiomatic vocabulary for formalizing LCMs, derived from the established axiomatic vocabulary for formalizing processor memory consistency models. Using this vocabulary, we formalize microarchitectural leakage—focusing on leakage through hardware memory systems—so that it can be automatically detected in programs, and we provide a taxonomy for classifying said leakage by severity. To illustrate the efficacy of LCMs, we first demonstrate that our leakage definition faithfully captures a sampling of (transient and non-transient) microarchitectural attacks from the literature. Second, we develop a static analysis tool based on LCMs which automatically identifies Spectre vulnerabilities in programs and scales to analyze real-world crypto libraries.
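    The intuition behind memory-system leakage can be condensed into a few lines (an illustrative sketch, not the paper's formal definitions): a program leaks if two executions that agree on public inputs but differ in secrets produce different attacker-visible traces, where the trace here is the sequence of cache-line indices touched.

```python
LINE = 64  # assumed cache-line size in bytes

def cache_trace(addresses):
    # What a cache side-channel attacker plausibly observes.
    return [a // LINE for a in addresses]

def leaks(prog, public, secret_a, secret_b):
    # Leakage = distinguishable traces under equal public inputs.
    return cache_trace(prog(public, secret_a)) != cache_trace(prog(public, secret_b))

def table_lookup(pub, secret):
    return [0x1000 + secret * LINE]   # secret-indexed load: classic leak

def masked_select(pub, secret):
    return [0x1000]                   # address independent of the secret
```

    An axiomatic formulation replaces the trace comparison with relations over execution graphs, which is what makes automatic detection and a severity taxonomy possible.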

    Hardware-Software Codesign for Mitigating Spectre

    Spectre attacks exploit control- and data-flow (mis)prediction on modern processors to transiently leak program secrets. Comprehensively mitigating Spectre leakage is hard, and doing so while preserving the program’s performance is even harder: no existing Spectre mitigations are widely deployed due to their high overhead or high complexity. We claim that a comprehensive, efficient, and low-complexity mitigation for Spectre attacks requires engaging in software-compiler-hardware co-design. In our talk, we will pitch such a co-designed Spectre mitigation that will be widely deployable at a low cost in security-critical applications. As a first step towards this goal, we have developed Serberus, a comprehensive and proven-correct Spectre mitigation for constant-time code that targets existing hardware. We are currently exploring lightweight hardware support to improve Serberus’ performance in other application domains.

    nl2spec: Interactively Translating Unstructured Natural Language to Temporal Logics with Large Language Models

    A rigorous formalization of desired system requirements is indispensable when performing any verification task. This often limits the application of verification techniques, as writing formal specifications is an error-prone and time-consuming manual task. To facilitate this, we present nl2spec, a framework for applying Large Language Models (LLMs) to derive formal specifications (in temporal logics) from unstructured natural language. In particular, we introduce a new methodology to detect and resolve the inherent ambiguity of system requirements in natural language: we utilize LLMs to map subformulas of the formalization back to the corresponding natural language fragments of the input. Users iteratively add, delete, and edit these sub-translations to amend erroneous formalizations, which is easier than manually redrafting the entire formalization. The framework is agnostic to specific application domains and can be extended to similar specification languages and new neural models. We perform a user study to obtain a challenging dataset, which we use to run experiments on the quality of translations. We provide an open-source implementation, including a web-based frontend.
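    The sub-translation idea can be sketched as follows (the template syntax and fragment names are invented for illustration, not nl2spec's actual interface): each natural-language fragment maps to an LTL subformula, and the full formula is reassembled from that table, so a user can repair one entry instead of redrafting the whole specification.

```python
def assemble(template, subs):
    # Substitute each named subformula into the top-level LTL template.
    formula = template
    for name, ltl in subs.items():
        formula = formula.replace("{" + name + "}", ltl)
    return formula

template = "G ({trigger} -> {response})"
subs = {
    "trigger": "request",      # from "whenever a request occurs"
    "response": "F grant",     # from "a grant eventually follows"
}
first_try = assemble(template, subs)

# The user amends one erroneous sub-translation; everything else is reused.
subs["response"] = "F (grant & !error)"
repaired = assemble(template, subs)
```

    In the actual framework the table entries are proposed by an LLM; the key design choice is that edits stay local to one fragment-to-subformula pair.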

    Effects of auditory sleep modulation approaches on brain oscillatory and cardiovascular dynamics

    Slow waves, the hallmark feature of deep non-rapid eye movement sleep, potentially drive the restorative effects of sleep on brain and body functions. Techniques for modulating sleep to elucidate the functional role of slow waves have therefore gained considerable interest. Auditory slow-wave stimulation is a promising tool; however, direct within-night comparisons of auditory stimulation approaches, and analyses of the dynamic brain and cardiovascular effects they induce, are still missing. Here, we tested various auditory stimulation approaches in a windowed within-night design of 10 s ON (stimulation) followed by 10 s OFF (no stimulation) and compared them to a SHAM control condition. We report the results of three studies comprising a total of 51 included nights and found a large, global increase in slow-wave activity (SWA) in the stimulation window compared to SHAM. Furthermore, the slow-wave increase was most pronounced at the start of the stimulation window and declined across it. Beyond the changes in brain oscillations, we observed, for some conditions, a significant increase in the mean interval between two heartbeats within a stimulation window, indicating a slowing of the heart rate, as well as increased heart-rate-variability-derived parasympathetic activity. These cardiovascular changes were positively correlated with the change in SWA; our findings thus provide insight into the potential of auditory slow-wave enhancement to modulate cardiovascular restorative conditions during sleep. Future studies need to investigate whether the potentially increased restorative capacity of slow-wave enhancement translates into a more rested cardiovascular system on the subsequent day.
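    The cardiovascular measures reported above can be computed in a few lines: the mean interbeat (RR) interval, whose increase indicates a slower heart rate, and RMSSD, a heart-rate-variability index commonly read as parasympathetic activity. The R-peak times below are synthetic, for illustration only.

```python
def rr_intervals(peak_times_s):
    # Successive differences between R-peak times, in seconds.
    return [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]

def mean_rr(rr):
    # Longer mean RR interval = slower heart rate.
    return sum(rr) / len(rr)

def rmssd(rr):
    # Root mean square of successive RR differences (HRV index).
    diffs = [(b - a) ** 2 for a, b in zip(rr, rr[1:])]
    return (sum(diffs) / len(diffs)) ** 0.5

peaks = [0.0, 0.98, 2.00, 3.05, 4.05]   # synthetic R-peak times (s)
rr = rr_intervals(peaks)                # ≈ [0.98, 1.02, 1.05, 1.00]
```

    In a windowed design, these statistics would be computed separately over the ON and OFF (and SHAM) windows and then compared.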

    A Study of the Learnability of Relational Properties: Model Counting Meets Machine Learning (MCML)

    This paper introduces the MCML approach for empirically studying the learnability of relational properties expressible in the well-known software design language Alloy. A key novelty of MCML is the quantification of the performance of, and semantic differences among, trained machine learning (ML) models, specifically decision trees, with respect to entire (bounded) input spaces, not just given training and test datasets (as is the common practice). MCML reduces the quantification problems to the classic complexity-theory problem of model counting and employs state-of-the-art model counters. The results show that relatively simple ML models can achieve surprisingly high performance (accuracy and F1-score) when evaluated in the common setting of using training and test datasets, even when the training dataset is much smaller than the test dataset, indicating the seeming simplicity of learning relational properties. However, MCML metrics based on model counting show that performance can degrade substantially when models are tested against the entire (bounded) input space, indicating the high complexity of precisely learning these properties and the usefulness of model counting in quantifying true performance.
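    The core idea can be sketched with brute-force enumeration standing in for a real model counter (the property and "trained" model below are toy examples, not from the paper): a classifier is scored against the entire bounded input space rather than a held-out test set.

```python
from itertools import product

def true_property(bits):
    # Ground-truth relational property (toy): at least two bits set.
    return sum(bits) >= 2

def learned_model(bits):
    # A deliberately over-simple stand-in for a trained decision tree.
    return bits[0] == 1

def full_space_accuracy(n_bits):
    # Exact accuracy over every input in the bounded space; a model counter
    # computes the same quantity symbolically, without enumeration.
    space = list(product([0, 1], repeat=n_bits))
    agree = sum(true_property(x) == learned_model(x) for x in space)
    return agree / len(space)
```

    A skewed test set can make `learned_model` look nearly perfect; the full-space score exposes exactly how much of the bounded input space it misclassifies.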

    Concurrency and Security Verification in Heterogeneous Parallel Systems

    To achieve performance scaling at manageable power and thermal levels, modern systems architects employ parallelism along with high degrees of hardware specialization and heterogeneity. Unfortunately, the power and performance improvements afforded by heterogeneous parallelism come at the cost of significantly increased design complexity, with different components being programmed differently and accessing shared resources differently. This design complexity in turn presents challenges for architects who need to devise mechanisms for orchestrating, enforcing, and verifying the correctness and security of executing applications. As it turns out, software-level correctness and security problems can result from problematic hardware event orderings and interleavings that take place when an application executes on a particular hardware implementation. Since hardware designs are complex, and since a single user-facing instruction can exhibit a variety of different hardware execution event sequences, analyzing and verifying systems for correct and secure orderings and interleavings of these events is challenging. To address this issue, this dissertation combines hardware systems architecture approaches with formal methods techniques to support the specification, analysis, and verification of implementation-aware event ordering scenarios. The specific goal here is enabling automatic synthesis of implementation-aware programs capable of violating correctness or security guarantees when such programs exist. First, this dissertation presents TriCheck, an approach and tool for conducting full-stack memory consistency model verification (from high-level programming languages down through hardware implementations). Using rigorous and efficient formal approaches, TriCheck identified flaws in the 2016 RISC-V memory model specification and two counterexamples to a previously proven-correct compiler mapping scheme from C11 onto Power and ARMv7. 
Second, after making the important observation that memory consistency model analyses and security analyses are amenable to similar approaches, this thesis presents CheckMate, an approach and tool for conducting hardware security verification. CheckMate uses formal techniques to evaluate the susceptibility of a hardware system design to formally specified classes of security exploits. When a design is susceptible, proof-of-concept exploit code is synthesized. CheckMate automatically synthesized programs representative of Meltdown and Spectre, as well as two new exploits, MeltdownPrime and SpectrePrime. Third, this dissertation presents approaches for handling memory model heterogeneity in hardware systems, focusing on correctness and highlighting the applicability of the proposed techniques to security.
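    The flavor of full-stack consistency checking can be illustrated with the classic store-buffering litmus test (a heavily simplified sketch, not TriCheck's actual machinery): enumerating outcomes under sequential consistency (SC) and under a TSO-style relaxation that lets a later load bypass an earlier store shows that the relaxed model admits the r1 = r2 = 0 outcome SC forbids.

```python
# Two threads: each stores 1 to one variable, then loads the other.
T0 = [("W", "x", 1), ("R", "y", "r1")]
T1 = [("W", "y", 1), ("R", "x", "r2")]

def interleavings(a, b):
    # All merges of the two op sequences that preserve per-thread order.
    if not a or not b:
        yield tuple(a) + tuple(b)
        return
    for rest in interleavings(a[1:], b):
        yield (a[0],) + rest
    for rest in interleavings(a, b[1:]):
        yield (b[0],) + rest

def run(schedule):
    # Execute one schedule over a flat shared memory.
    mem, regs = {"x": 0, "y": 0}, {}
    for op in schedule:
        if op[0] == "W":
            mem[op[1]] = op[2]
        else:
            regs[op[2]] = mem[op[1]]
    return (regs["r1"], regs["r2"])

sc_outcomes = {run(s) for s in interleavings(T0, T1)}
tso_outcomes = {run(s)
                for t0 in (T0, T0[::-1])    # a load may bypass the store
                for t1 in (T1, T1[::-1])
                for s in interleavings(t0, t1)}
```

    Full-stack verification in the TriCheck style compares such outcome sets across layers: an implementation is broken when it observably permits an outcome the higher-level model forbids.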